Autoparser - complete refactoring of parser architecture #1376

Open
firecoperana wants to merge 5 commits into main from fcp/auto_parser2

Conversation

@firecoperana
Collaborator

@firecoperana firecoperana commented Mar 6, 2026

Port the new Autoparser and the optional argument reshuffle capability from the mainline PRs

ggml-org/llama.cpp#18675 and ggml-org/llama.cpp#20171

Continues #1369

Includes fixes for MiniMax and DeepSeek V3.2 tool calls.
Adds a parser for Gemma 4.

Adds the following command-line options:
--reasoning: on|off|auto, controls whether reasoning is enabled
--reasoning-budget: token budget for thinking: -1 for unrestricted, 0 for immediate end, N>0 for token budget (default: -1)
--reasoning-budget-message: message injected before the end-of-thinking tag when reasoning budget is exhausted (default: none)
--parallel-tool-calls: enable parallel tool calls
--skip-chat-parsing: force a pure content parser, even if a Jinja template is specified; model will output everything
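
Putting the new flags together, a hypothetical server invocation might look like the following (binary path, model path, and port are placeholders; the flag names and value ranges come from the list above, and --jinja is the usual switch that enables the chat-template path these parsers attach to):

```shell
./build/bin/llama-server -m ./models/model.gguf --port 8080 --jinja \
    --reasoning auto \
    --reasoning-budget 4096 \
    --reasoning-budget-message "Okay, time to answer." \
    --parallel-tool-calls
```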

@firecoperana
Collaborator Author

Streaming for tool calls is enabled.

@ikawrakow
Owner

@ikawrakow Can you merge the PEG parser PR first and then this one? This is a large PR and I don't want to squash them into one commit.

So is the PEG parser PR. Are we confident that there are no regressions?

@firecoperana
Collaborator Author

I see some new issues with the auto parser mentioned in mainline. In the PEG parser PR, PEG is just added without being used. It's mostly the new Jinja template engine changes that matter. It's been there for a while, so most bugs should have been fixed.

@hksdpc255
Contributor

hksdpc255 commented Mar 8, 2026

These changes in mainline llama.cpp appear to work well. All models except MiroThinker (which was added by me) have already been tested by upstream developers.

In principle, this should not introduce regressions, unless there are additional unmerged differences between mainline and ik_llama.cpp. I'll test this branch.

@hksdpc255
Contributor

hksdpc255 commented Mar 9, 2026

This breaks MiroThinker, but it seems to be an upstream bug.

log:

error: the supplied chat template is not supported: /var/llm/MiroThinker.jinja
error: invalid parameter for argument: --chat-template-file
common_chat_templates_init: error: Index 1 out of bounds for array of size 0
common_chat_templates_init: failed to initialize chat template
common_chat_templates_init: please consider disabling jinja via --no-jinja, or using another chat template
common_chat_verify_template: failed to apply template: std::exception

@firecoperana
Collaborator Author

There are more issues showing up in mainline. I will wait until they are fully resolved.

@firecoperana firecoperana force-pushed the fcp/auto_parser2 branch 2 times, most recently from 796aa23 to 59233a7 Compare March 10, 2026 22:54
@sayap
Contributor

sayap commented Mar 11, 2026

In examples/server/server-common.cpp, can we change parallel_tool_calls to default to true? It is harder than I thought to try setting this as a request parameter in various agentic coding tools 😬

This capability is from late 2024, so all the newer models should already support this.

@firecoperana
Collaborator Author

I could add a command line arg to enable it.

@hksdpc255
Contributor

hksdpc255 commented Mar 12, 2026

@firecoperana I'm planning to send PRs to upstream llama.cpp adding support for MiroThinker with the refactored chat template. I have tested that this patch works. You can merge it.

diff --git a/common/chat.cpp b/common/chat.cpp
index b799912a..7a76f8a9 100644
--- a/common/chat.cpp
+++ b/common/chat.cpp
@@ -1278,6 +1278,116 @@ static common_chat_params common_chat_params_init_kimi_k2(const common_chat_temp
     return data;
 }
 
+// MiroThinker - uses MCP style toolcalling
+static common_chat_params common_chat_params_init_mirothinker(const common_chat_template &    tmpl,
+                                                          const autoparser::templates_params & inputs) {
+    common_chat_params data;
+
+    data.prompt             = common_chat_template_direct_apply(tmpl, inputs);
+    data.format             = COMMON_CHAT_FORMAT_PEG_NATIVE;
+    data.supports_thinking  = true;
+    data.thinking_start_tag = "<think>";
+    data.thinking_end_tag   = "</think>";
+    data.preserved_tokens  = {
+        "<think>",
+        "</think>",
+    };
+
+    auto has_tools         = inputs.tools.is_array() && !inputs.tools.empty();
+    auto extract_reasoning = inputs.reasoning_format != COMMON_REASONING_FORMAT_NONE;
+    auto include_grammar   = has_tools && inputs.tool_choice != COMMON_CHAT_TOOL_CHOICE_NONE;
+
+    auto parser = build_chat_peg_parser([&](common_chat_peg_builder & p) {
+        // MiroThinker Thinking format:
+        // - Reasoning: <think>{reasoning}</think>
+        // - Content: text after reasoning
+        // - Tool calls section:
+        //   <use_mcp_tool>
+        //   <server_name>{server_name}</server_name>
+        //   <tool_name>{tool_name}</tool_name>
+        //   <arguments>
+        //   {json_args}
+        //   </arguments>
+        //   ...
+        //   </use_mcp_tool>
+
+        auto reasoning = extract_reasoning ? p.optional("<think>" + p.reasoning(p.until("</think>")) + "</think>") : p.eps();
+
+        // Tool call markers
+        const std::string SECTION_BEGIN = "<use_mcp_tool>";
+        const std::string SECTION_END   = "</use_mcp_tool>";
+        const std::string CALL_BEGIN    = "<server_name>";
+        const std::string ARGS_BEGIN    = "<arguments>";
+        const std::string CALL_END      = "</arguments>";
+
+        auto end = p.end();
+
+        // Content only parser (no tools)
+        if (!has_tools || inputs.tool_choice == COMMON_CHAT_TOOL_CHOICE_NONE) {
+            return reasoning + p.content(p.rest()) + end;
+        }
+
+        // Build tool call parsers for each available function
+        // Function name format is: <tool_name>{tool_name}</tool_name>
+        // We need to match: {what_ever}</server_name>{spaces}<tool_name>{tool_name}</tool_name>
+        auto tool_choice = p.choice();
+        foreach_function(inputs.tools, [&](const json & tool) {
+            const auto & function = tool.at("function");
+            std::string  name     = function.at("name");
+            const auto & schema   = function.at("parameters");
+
+            // Match: {what_ever}</server_name>{spaces}<tool_name>{tool_name}</tool_name>
+            auto tool_parser = p.tool(
+                p.tool_open(
+                    p.until("</server_name>") +
+                    p.literal("</server_name>") +
+                    p.space() +
+                    p.literal("<tool_name>") +
+                    p.tool_name(p.literal(name)) +
+                    p.literal(ARGS_BEGIN)
+                ) + p.space() +
+                p.tool_args(p.schema(p.json(), "tool-" + name + "-schema", schema)) +
+                p.space() + p.tool_close(p.literal(CALL_END))
+            );
+
+            tool_choice |= p.rule("tool-" + name, tool_parser);
+        });
+
+        // Tool calls section: <use_mcp_tool> tool_calls </use_mcp_tool>
+        auto min_calls  = inputs.tool_choice == COMMON_CHAT_TOOL_CHOICE_REQUIRED ? 1 : 0;
+        auto max_calls  = inputs.parallel_tool_calls ? -1 : 1;
+        auto tool_calls = p.trigger_rule("tool-calls",
+            p.literal(SECTION_BEGIN) + p.space() +
+            p.rule("tool-call", p.repeat(CALL_BEGIN + tool_choice, min_calls, max_calls) +
+            p.space() + p.literal(SECTION_END))
+        );
+
+        auto content_before_tools = p.content(p.until(SECTION_BEGIN));
+
+        return reasoning + content_before_tools + tool_calls + end;
+    });
+
+    data.parser = parser.save();
+
+    if (include_grammar) {
+        data.grammar_lazy = inputs.tool_choice == COMMON_CHAT_TOOL_CHOICE_AUTO;
+        data.grammar      = build_grammar([&](const common_grammar_builder & builder) {
+            foreach_function(inputs.tools, [&](const json & tool) {
+                const auto & function = tool.at("function");
+                auto         schema   = function.at("parameters");
+                builder.resolve_refs(schema);
+            });
+            parser.build_grammar(builder, data.grammar_lazy);
+        });
+
+        data.grammar_triggers = {
+            { COMMON_GRAMMAR_TRIGGER_TYPE_WORD, "<use_mcp_tool>" }
+        };
+    }
+
+    return data;
+}
+
 // LFM2 format:
 // - Reasoning: <think>{reasoning}</think> (optional, only if enable_thinking is true)
 // - Content: text after reasoning (optional)
@@ -1517,6 +1627,14 @@ static common_chat_params common_chat_templates_apply_jinja(const struct common_
         return common_chat_params_init_kimi_k2(tmpl, params);
     }
 
+    // MiroThinker - uses MCP style toolcalling <use_mcp_tool> ... </use_mcp_tool>
+    // Detection: template has "</use_mcp_tool>" and "</server_name>"
+    if (src.find("</use_mcp_tool>") != std::string::npos &&
+        src.find("</server_name>") != std::string::npos) {
+        LOG_DBG("Using specialized template: MiroThinker\n");
+        return common_chat_params_init_mirothinker(tmpl, params);
+    }
+
     // LFM2 - uses <|tool_list_start|>/<|tool_list_end|> markers and <|tool_call_start|>[name(args)]<|tool_call_end|> format
     // Detection: template has "<|tool_list_start|>" and "<|tool_list_end|>" markers
     if (src.find("<|tool_list_start|>") != std::string::npos &&

@firecoperana
Collaborator Author

Added support for MiroThinker with the new template engine, as well as --parallel-tool-calls to enable parallel tool calls.

@firecoperana
Collaborator Author

Converting to draft as I see more issues with this PR coming up in mainline.

@firecoperana firecoperana marked this pull request as draft March 14, 2026 16:35
common/chat.h Outdated
Comment on lines +220 to +221
std::string thinking_start_tag;
std::string thinking_end_tag;
Contributor

Are thinking_start_tag and thinking_end_tag really useful here?

In mainline llama.cpp, I add data.thinking_start_tag = "<think>"; in common_chat_params_init_mirothinker because other parsers (e.g. kimi_k2_thinking) also add it. When porting back to ik_llama.cpp, I chose to delete data.thinking_start_tag = "<think>"; and data.thinking_end_tag = "</think>"; and it works, too.

Collaborator Author

After applying your patch, the code won't compile without it.

Contributor

I meant to remove it both here

ik_llama.cpp/common/chat.cpp

Lines 1285 to 1286 in 4fc7a15

data.thinking_start_tag = "<think>";
data.thinking_end_tag = "</think>";

and here

ik_llama.cpp/common/chat.h

Lines 220 to 221 in 4fc7a15

std::string thinking_start_tag;
std::string thinking_end_tag;

But I'm not quite sure why ik_llama.cpp does not have these fields in common_chat_params.

Collaborator Author

No idea either. Maybe they were not ported.

Contributor

Suggested change
std::string thinking_start_tag;
std::string thinking_end_tag;

Comment on lines +1285 to +1286
data.thinking_start_tag = "<think>";
data.thinking_end_tag = "</think>";
Contributor

Suggested change
data.thinking_start_tag = "<think>";
data.thinking_end_tag = "</think>";

@hksdpc255
Contributor

After removing thinking_start_tag and thinking_end_tag, it compiles successfully. It seems these fields are related to functionalities that ik_llama.cpp does not currently support.

@milpster

Using this with Qwen3.5 and Claude Code, I get random stops that I cannot explain. Attached is a log of one.
log.log

@hksdpc255
Contributor

@firecoperana Could you rebase this branch again? It seems the main branch has received a lot of refactoring, which makes this PR hard to apply locally.

@firecoperana
Collaborator Author

Rebased, but no new commit from mainline.

@vikcious

vikcious commented Apr 3, 2026

Any chance to merge this to mainline? Currently ik_llama doesn't do well with a very nice model (MiniMax-2.5), throwing: "render_message_to_json: Neither string content nor typed content is supported by the template. This is unexpected and may lead to issues." and making the model pretty much useless.
Switching to pw_llama.cpp is an option, but since it is a fork of llama.cpp mainline, I really miss the "-sm graph" speedy option.

P.S. Much to my surprise (assume: n00b), the model would fail with ik_llama while it doesn't properly support -khad / -vhad as options with "-sm graph". The template error persists but has no impact on the actual task in Opencode.

Apologies!

@firecoperana
Collaborator Author

I just rebased on the latest main. How stable is the autoparser in mainline now? Last time I checked, there were still some regressions, so I want to wait for a while before merging all the fixes they have.

@vikcious

vikcious commented Apr 3, 2026

I just rebased on the latest main. How stable is the autoparser in mainline now? Last time I checked, there were still some regressions, so I want to wait for a while before merging all the fixes they have.

I cannot vouch for a "bit-to-bit" comparison of the mainline "autoparser" vs the original (https://github.com/pwilkin/llama.cpp/tree/autoparser), but mainline still doesn't get it right, with MiniMax-2.5 at least, while using @pwilkin's fork works nicely and respects all & co.

E.g., with mainline:
"Now I'll update the todo and create the HTML frontend.
"

@firecoperana
Collaborator Author

Are you saying that for MiniMax-2.5, the original autoparser merged into llama.cpp performs better than their latest commits?

@vikcious

vikcious commented Apr 3, 2026

Are you saying that for MiniMax-2.5, the original autoparser merged into llama.cpp performs better than their latest commits?

Nope, I am saying that the original, reworked autoparser from https://github.com/pwilkin/llama.cpp/tree/autoparser, by @pwilkin is still the true source, and for me works flawlessly every time.

@firecoperana
Collaborator Author

I don't understand how it can perform differently after being merged into llama.cpp, but they have had a lot of fixes since the merge. The PR in ik_llama.cpp has a few of the newer commits too, but not all of them. Give it a try, but your best bet is to create an issue in mainline and let them restore the old behavior.

@hksdpc255
Contributor

I just rebased on the latest main. How stable is the autoparser in mainline now? Last time I checked, there were still some regressions, so I want to wait for a while before merging all the fixes they have.

In my experience, I’ve consistently used this setup with the mainline ik_llama.cpp and haven’t encountered any issues. This might be because my locally deployed models are relatively limited (only Qwen 3/3.5, gpt-oss, and MiroThinker).

@usrlocalben
Contributor

@hksdpc255

I’ve consistently used this setup with the mainline ik_llama.cpp

In ik_llama threads, mainline generally refers to ggml-org/llama.cpp.

Can you give a full path to disambiguate? i.e. Do you mean

ikawrakow/ik_llama.cpp:main
or
ggml-org/llama.cpp:master

?

@hksdpc255
Contributor

hksdpc255 commented Apr 5, 2026

I mean the ik_llama.cpp main branch, since this PR is essentially a fork of ik_llama.cpp.

@ikawrakow
Owner

@firecoperana

I see you asking people to try this PR, but it is still a draft. Should we consider merging it?

@firecoperana
Collaborator Author

I want to sync all relevant commits from mainline before making it ready for review. Plus, I still see some regressions with how the thinking tag is handled in mainline, so let's keep it in a separate branch for a while. It should be easy for people to rebase it on the main branch if they want to try this PR. I will sync it from time to time too.

@jukofyork
Contributor

throwing: "render_message_to_json: Neither string content nor typed content is supported by the template. This is unexpected and may lead to issues."

The template error persists but has no impact on the actual task in Opencode.

I'm just testing GLM-5.1 (without this PR) and also getting this same template error, but so far it doesn't seem to have caused any problems in opencode and everything seems to be working.

@jukofyork
Contributor

throwing: "render_message_to_json: Neither string content nor typed content is supported by the template. This is unexpected and may lead to issues."
The template error persists but has no impact on the actual task in Opencode.

I'm just testing GLM-5.1 (without this PR) and also getting this same template error, but so far it doesn't seem to have caused any problems in opencode and everything seems to be working.

In case anyone else runs into this, the fix is this:

{%- if 'function' in tool -%}
    {%- set tool = tool['function'] -%}
{%- endif -%}

needs changing to this:

{%- if tool.function is defined -%}
    {%- set tool = tool.function -%}
{%- endif -%}

Not sure if it is really having a negative effect, but that message getting spammed was irritating...
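
To see why the two expressions can behave differently, here is a plain-Python sketch (ToolView is a hypothetical stand-in; the Jinja engine's real value types differ): membership testing requires container support, while attribute access does not.

```python
class ToolView:
    """Hypothetical stand-in for a typed template value: it exposes
    attributes but supports neither membership testing nor iteration."""
    def __init__(self, function):
        self.function = function

tool = ToolView(function={"name": "get_forecast"})

# Attribute-style lookup (what `tool.function is defined` needs) works:
assert getattr(tool, "function", None) is not None

# Membership-style lookup (what `'function' in tool` needs) raises,
# because the object implements no __contains__/__iter__/__getitem__:
try:
    "function" in tool
    raised = False
except TypeError:
    raised = True
assert raised
```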

Autoparser: add optional argument reshuffle capability

Autoparser: True streaming (#20177)

* Relax atomicity constraint for nicer, more pleasant, True Streaming parsing

* Whitespace

* Remove redundant atomics

Revert to OAI-compatible args (#20213)

* Revert to OAI-compatible args

* Apply workaround::func_args_not_string

Fix structured outputs (#20223)

* Fix structured outputs

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Aldehir Rojas <hello@alde.dev>

Fix compile bug (#20203)

* Fix compile bug

* Update common/chat-auto-parser-helpers.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
# Conflicts:
#	common/chat-auto-parser-helpers.cpp

common : gracefully handle incomplete output (#20191)

* common : handle incomplete UTF-8 at end of input in PEG parser

* cont : if reached end prematurely, emit needs_more_input to propagate partial output

* cont: refactor peg parse context to add lenient flag

* cont : remove partial flag, keep lenient flag
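
The idea behind handling a truncated trailing character can be sketched in Python; this illustrates the general technique (hold back a dangling multi-byte prefix), not llama.cpp's implementation:

```python
def incomplete_utf8_suffix(data: bytes) -> int:
    """Number of trailing bytes forming an incomplete UTF-8 sequence
    (0 if the buffer ends on a complete character)."""
    for i in range(1, min(4, len(data) + 1)):
        b = data[-i]
        if b & 0b1100_0000 == 0b1000_0000:
            continue  # continuation byte, keep scanning backwards
        # Found the lead byte: how long should the sequence be?
        if b & 0b1000_0000 == 0:
            need = 1  # plain ASCII
        elif b & 0b1110_0000 == 0b1100_0000:
            need = 2
        elif b & 0b1111_0000 == 0b1110_0000:
            need = 3
        else:
            need = 4
        return i if i < need else 0
    return 0

# "é" encodes to two bytes (0xC3 0xA9); cutting after the first byte
# leaves one dangling byte that the caller should hold back.
assert incomplete_utf8_suffix("café".encode()[:-1]) == 1
assert incomplete_utf8_suffix("café".encode()) == 0
```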

PEG parser for LFM2 (#20251)

* PEG parser for LFM2

* Simplify using python_value()

common: map developer role to system (#20215)

* Map developer role to system
* Simplify

common: consolidate PEG string parsers (#20263)

* common : consolidate PEG string parsers
* cont : fix json_string_content()

examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)

* Fix logic for retrieving schema items in `json_schema_to_grammar.py`

If `schema['items']` is `{}` and `prefixItems` is not in the schema, then since `{}` is falsy, the original code here will raise an error.

I think if `schema['items']` is `{}`, then items should just be `{}`

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tests for arrays with empty items

Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covering this edge case.

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
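
The Python falsiness pitfall this commit describes reduces to the following sketch (the fallback schema here is hypothetical, not the converter's actual default):

```python
# An empty schema {} is falsy in Python, so truthiness checks conflate
# "no items key" with "items is the empty schema" (which matches any
# value).
schema = {"type": "array", "items": {}}

# Buggy pattern: the `or` fallback fires even though "items" is present.
items_buggy = schema.get("items") or {"type": "string"}

# Correct pattern: test key presence, not truthiness.
items_fixed = schema["items"] if "items" in schema else {"type": "string"}

assert items_buggy == {"type": "string"}  # empty schema silently replaced
assert items_fixed == {}                  # empty schema preserved
```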

Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)

do not return if template parse failed

add arg to enable parallel tool call

common : fix incorrect uses of stoul (#20313)
# Conflicts:
#	common/arg.cpp
#	src/llama-grammar.cpp

examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)

* Fix logic for retrieving schema items in `json_schema_to_grammar.py`

If `schema['items']` is `{}` and `prefixItems` is not in the schema, then since `{}` is falsy, the original code here will raise an error.

I think if `schema['items']` is `{}`, then items should just be `{}`

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tests for arrays with empty items

Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covering this edge case.

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

Add support for MiroThinker with new jinja template

common/parser: handle reasoning budget (#20297)

* v1

* Finished!

* Handle CLI

* Reasoning sampler

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Less explosive terminology :)

* Add utf-8 case and tests

* common : migrate reasoning budget sampler to common

* cont : clean up

* cont : expose state and allow passing as initial state

* cont : remove unused imports

* cont : update state machine doc string

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Alde Rojas <hello@alde.dev>
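
A minimal sketch of the budget semantics this commit implements (-1 unrestricted, 0 immediate end, N > 0 token budget, with the budget message injected before the end-of-thinking tag); the class and method names are hypothetical, not llama.cpp's API:

```python
from dataclasses import dataclass

@dataclass
class ReasoningBudget:
    """Illustration of reasoning-budget enforcement, not the real sampler."""
    budget: int = -1   # -1 = unrestricted, 0 = end immediately, N > 0 = budget
    message: str = ""  # injected before the end-of-thinking tag
    used: int = 0

    def on_reasoning_token(self):
        """Call once per reasoning token; returns text to force-inject
        when thinking must stop, else None."""
        if self.budget < 0:
            return None  # unrestricted
        if self.used >= self.budget:
            return self.message + "</think>"
        self.used += 1
        return None

rb = ReasoningBudget(budget=2, message="\n(budget reached)\n")
outs = [rb.on_reasoning_token() for _ in range(3)]
assert outs[:2] == [None, None]      # two tokens allowed
assert outs[2].endswith("</think>")  # then thinking is closed
```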

common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)

common/parser: add GigaChatV3/3.1 models support (#19931)

Co-authored-by: Mishusha <pmv26021975@gmail.com>

common/parser: gracefully handle undetected tool parser, print error message. (#20286)

fix: prevent nullptr dereference (#20552)

common : fix iterator::end() dereference (#20445)
# Conflicts:
#	common/regex-partial.cpp

jinja : add capability check for object args (#20612)

common/parser: add `--skip-chat-parsing` to force a pure content parser. (#20289)

* Add `--force-pure-content` to force a pure content parser.

* Update common/arg.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : rework gpt-oss parser (#20393)

* common : rework gpt-oss parser

* cont : fix gpt-oss tests

* cont : add structured output test

* cont : rename final to final_msg

common : fix gpt-oss content removal (#20745)

common/parser: add proper reasoning tag prefill reading (#20424)

* Implement proper prefill extraction

* Refactor cli parameters, update docs, move reasoning budget sampler part to common/reasoning-budget.cpp

* Update tools/server/server-task.cpp

* refactor: move grammars to variant, remove grammar_external, handle exception internally

* Make code less C++y

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

chat : handle tool calls with no required args in TAG_WITH_TAGGED format (#20764)

* chat : handle tool calls with no required args in TAG_WITH_TAGGED format

* Update tests/test-chat.cpp [no ci]

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: Aldehir Rojas <hello@alde.dev>

common/parser : fix out_of_range crash in throw path (#20424 regression) (#20777)

* chat : fix out_of_range crash in throw path (#20424 regression)

#20424 introduced effective_input = generation_prompt + input, but the
throw path uses input.substr(result.end) where result.end is a position
within effective_input. Every thinking model with a non-empty
generation_prompt crashes with std::out_of_range instead of the intended
error message.

Test crashes on unpatched master, passes with fix:

  cmake -B build -DLLAMA_BUILD_TESTS=ON -DLLAMA_BUILD_TOOLS=OFF
  cmake --build build --target test-chat
  ./build/bin/test-chat

* Update test-chat.cpp

* Update test-chat.cpp

* Update test-chat.cpp

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>

jinja : fix heap OOB read in value equality comparison (#20782)

Address GHSA-q9j6-4hhc-rq9p and GHSA-2q4c-9gq5-5vfp.

The three-iterator overload of std::equal in value_array_t::equivalent()
and value_object_t::equivalent() reads past the end of the shorter
container when comparing arrays or objects of different lengths.

Use the four-iterator overload (C++14) which checks both range lengths.

Found-by: Pwno
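
The pitfall has a direct Python analogue, sketched below: zip() stops at the shorter sequence, just as the three-iterator std::equal walks only the first range's length (in C++ the consequence is an out-of-bounds read rather than merely a wrong answer):

```python
xs = [1, 2, 3]
ys = [1, 2, 3, 4]

# Analogue of the three-iterator std::equal pitfall: comparing only over
# one range's length makes unequal-length containers look equal.
prefix_equal = all(a == b for a, b in zip(xs, ys))
assert prefix_equal  # looks equal, but the containers differ

# Analogue of the four-iterator overload: both lengths are checked.
really_equal = len(xs) == len(ys) and all(a == b for a, b in zip(xs, ys))
assert not really_equal
```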

common : fix typo in debug log ('extracft' -> 'extract') (#20807)

common/parser: fix nasty bug causing subtle corruption of generation prompt (#20825)

jinja : refactor token advancement (#20864)

* refactor token advancement

* exercise sub-expressions

common/autoparser : detect reasoning markers when enable_thinking changes system prompt (#20859)

common : replace wrap_for_generation with a prefix convenience function and fix gpt-oss (#20912)

jinja: fix macro with kwargs (#20960)

* jinja: fix macro with kwargs

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix newline problem

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : inhibit lazy grammar sampler while reasoning is active (#20970)

* common : inhibit grammar while reasoning budget is active

* cont : update force_pos in accept

* cont : fix tests

* cont : tweak should apply logic

* cont : return early not using grammar sampler

* Add tests

* cont : prevent backend sampling when reasoning budget enabled

* cont : fix typo

---------

Co-authored-by: Piotr Wilkin <piotr.wilkin@syndatis.com>
# Conflicts:
#	common/reasoning-budget.h
#	common/sampling.cpp
#	tools/cli/cli.cpp
#	tools/server/server-common.cpp
#	tools/server/server-task.cpp

common/parser: fix reasoning whitespace bugs + extra parser tests (#21085)

* fix whitespace reasoning issues + add reconstruction tests

* Proper fix

* fix Nemotron autoparser test expectations to include newline in marker

common : add reasoning_format = none support to gpt-oss (#21094)

common/json-schema: fix: handle non-capturing groups (?:...) in JSON schema pattern converter (#21124)

The regex-to-grammar converter in _visit_pattern() crashes with SIGSEGV
when a JSON schema "pattern" field contains a non-capturing group (?:...).

Root cause: when the parser sees '(' followed by '?', it pushes a warning
but does not advance past '?:'. The recursive transform() call then
interprets '?' as a quantifier and calls seq.back() on an empty vector,
causing undefined behavior.

This commonly occurs when serving OpenAI-compatible tool calls from
clients that include complex regex patterns in their JSON schemas (e.g.,
date validation patterns like ^(?:(?:\d\d[2468][048]|...)-02-29|...)$).

The fix:
- Skip '?:' after '(' to treat non-capturing groups as regular groups
- For unsupported syntax (?=, ?!, etc.), skip to matching ')' safely,
  handling escaped characters to avoid miscounting parenthesis depth
- Adjust the ')' unbalanced-parentheses check using direct char
  comparisons instead of substr
- Add test cases for non-capturing groups (C++ only, as the JS/Python
  implementations do not yet support this syntax)
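
For context, the class of JSON-schema "pattern" that triggered the crash, sketched in Python (this simplified date pattern is illustrative, not the exact one from the report):

```python
import re

# A pattern with non-capturing groups (?:...), as sent by clients that
# embed regex validation in tool-call JSON schemas.
pattern = r"^(?:\d{4})-(?:0[1-9]|1[0-2])-(?:0[1-9]|[12]\d|3[01])$"

rx = re.compile(pattern)
assert rx.match("2026-02-28")
assert not rx.match("2026-13-01")

# The fix treats (?:...) like a plain group: both constrain the same
# language; only capture bookkeeping differs.
converted = pattern.replace("(?:", "(")
assert re.compile(converted).match("2026-02-28")
```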

common/parser: fix handling of tool definition with missing properties key (#21128)

jinja : handle empty expressions correctly (#20913)

* Reject empty computed member expressions before returning slices[0] from parse_member_expression_arguments().

* Treat empty computed member expressions with Jinja2 undefined semantics

Treat empty computed member expressions like `a[]` as undefined instead of
raising a parser error, to match Jinja2 behavior.

- return a noop expression for empty computed member arguments
- return undefined when a computed member key evaluates to undefined
- add Jinja tests covering `a[]|default('fallback')` and `a[] is undefined`

* Handle undefined computed member properties

Move undefined-property handling to the common member access path, and add a test covering `a[undefined] is undefined`.

* Use default undefined value in member access

Initialize val and then return it when property is undefined.

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* empty statement parses to blank_expression instead of noop_statement

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : gpt-oss handle builtin and unsolicited tool calls (#21213)

fix: tool call parsing for LFM2 and LFM2.5 models (#21242)

* fix: tool call parsing for LFM2 and LFM2.5 models'

* refactor: add test / break out lfm2 and lfm2.5 parsing logic
# Conflicts:
#	common/chat.cpp

Relax prefill parser to allow space. (#21240)

* Relax prefill parser to allow space.

* Move changes from prefix() to parser generation

* Only allow spaces if we're not having a pure content parser next

common : add commentary rules for gpt-oss-20b (#21286)

add reasoning budget

model, mtmd: fix gguf conversion for audio/vision mmproj (#21309)

* fix gguf conversion for audio/vision mmproj

* fix test
# Conflicts:
#	convert_hf_to_gguf.py
#	examples/eval-callback/eval-callback.cpp
#	examples/mtmd/CMakeLists.txt
#	examples/mtmd/clip-impl.h
#	examples/mtmd/mtmd.cpp
#	gguf-py/gguf/constants.py
#	gguf-py/gguf/gguf_writer.py
#	gguf-py/gguf/tensor_mapping.py
#	src/CMakeLists.txt
#	src/llama-arch.cpp
#	src/llama-arch.h
#	src/llama-model.cpp
#	src/llama-model.h
#	src/llama-vocab.cpp
#	src/models/models.h
#	tests/test-llama-archs.cpp
#	tools/mtmd/clip-graph.h
#	tools/mtmd/clip-model.h
#	tools/mtmd/clip.cpp
#	tools/mtmd/models/models.h

fix: gemma 4 template (#21326)

chat : avoid including json in chat.h (#21306)

jinja: coerce input for string-specific filters (#21370)

common : fix tool call type detection for nullable and enum schemas (#21327)

* common : fix tool call type detection for nullable and enum schemas

* common, tests : fix grammar delegation for nullable/enum schemas and add tests

Fix enum type inference to scan all enum values (not just index 0) so
schemas like {"enum": [0, "celsius"]} correctly detect string type.

Fix schema_delegates in peg-parser to handle nullable type arrays
(["string", "null"]) and typeless enum schemas in raw mode, allowing
the tagged parser to use raw text instead of JSON-formatted strings.

Add test cases for Qwen3-Coder (TAG_WITH_TAGGED format):
- nullable string ["string", "null"]
- nullable string with null first ["null", "string"]
- nullable integer ["integer", "null"]
- enum without explicit type key
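
A sketch of the inference rule the fix describes, in Python (a hypothetical helper, not llama.cpp's code): scan all enum values rather than just index 0.

```python
def enum_types(schema):
    """Collect the JSON types present across *all* enum values."""
    kinds = set()
    for v in schema.get("enum", []):
        if isinstance(v, bool):      # bool first: bool is a subclass of int
            kinds.add("boolean")
        elif isinstance(v, int):
            kinds.add("integer")
        elif isinstance(v, str):
            kinds.add("string")
        elif v is None:
            kinds.add("null")
    return kinds

# Looking only at index 0 would report "integer" and mis-parse the
# string values; scanning everything catches both.
assert enum_types({"enum": [0, "celsius"]}) == {"integer", "string"}
```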

common/parser: fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers (#21230)

* Fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers

* Rename

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : add gemma 4 specialized parser (#21418)

* common : add gemma4 dedicated parser

* cont : add '<|tool_response>' as eog

* cont : emit JSON from Gemma4 tool call AST

* cont : more fixes

* cont : refactor convert function

* cont : refine rules and mapping

* cont : add more tests

* cont : clean up

* cont : remove autoparser gemma4 implementation

* cont : more cleanup

* cont : rename gemma4.jinja to match the others

* cont : add custom template to support interleaved thinking

* cont : preserve reasoning in model turns

* cont : fix initializer error

* cont : fix unused vars

* cont : fix accidental static

* cont : fix specialized_template signature

* fix extra semicolon

* remove debug line and extra space [no ci]

fix reasoning budget

parser: fix MiniMax handling (#21573)

jinja : support ensure_ascii=true, string repetition and int/float self-filtering (#21623)

* feat: jinja engine improvements for reka-edge

Port three Jinja engine improvements needed for the reka-edge model:
1. Python-style string repetition ("ab" * 3 → "ababab")
2. ensure_ascii=true support for tojson filter (escapes non-ASCII to \uXXXX)
3. int() builtin on value_int_t (identity, needed for Reka Edge template)

* fix: escape invalid utf8 bytes when ensure_ascii=true

The json_ensure_ascii_preserving_format function does not correctly
handle an edge case where if UTF-8 parsing fails, it adds the non-ascii
character back to the output as a raw byte.

This commit fixes that by adding the unicode standard replacement
character \\ufffd to the output instead. This is the standard behavior
for various programming languages like Python, Rust, Go, etc.

* chore: address PR comments

1. Add todo comment for supporting string repetition for array/tuples
2. Add support for float identity operation
3. Move invalid ascii test case to test_fuzzing

* chore: accept suggestion for common/jinja/value.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
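The `ensure_ascii=true` behavior described above (escape non-ASCII to `\uXXXX`, and substitute U+FFFD for invalid UTF-8 bytes instead of emitting them raw) can be sketched in Python. This is a simplified model, not the actual C++ `json_ensure_ascii_preserving_format` implementation; it leans on Python's `errors="replace"` decoding, which performs the same replacement-character substitution the commit adds.

```python
def json_escape_ensure_ascii(raw: bytes) -> str:
    """Escape a UTF-8 byte string so every non-ASCII code point becomes
    \\uXXXX; invalid UTF-8 bytes become U+FFFD (the standard replacement
    character) rather than passing through as raw bytes."""
    text = raw.decode("utf-8", errors="replace")  # invalid bytes -> U+FFFD
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 0x80:
            out.append(ch)
        elif cp <= 0xFFFF:
            out.append(f"\\u{cp:04x}")
        else:
            # Code points above the BMP become a UTF-16 surrogate pair,
            # matching json.dumps(..., ensure_ascii=True).
            cp -= 0x10000
            hi = 0xD800 + (cp >> 10)
            lo = 0xDC00 + (cp & 0x3FF)
            out.append(f"\\u{hi:04x}\\u{lo:04x}")
    return "".join(out)

print(json_escape_ensure_ascii("héllo".encode()))  # -> h\u00e9llo
print(json_escape_ensure_ascii(b"bad \xff byte"))  # -> bad \ufffd byte
```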

common : simplify autoparser tagged parser rules (#21216)

* common : simplify autoparser tagged parser rules

* cont : remove upper limit on optional args

* cont : revert changes to parsing at the end

* cont : undo arbitrary ordering of optional args

* cont : fix uninitialized required parameters

* revert to simplify merge

* re-apply patches

* restore flexible optional arg ordering tests

common : fix ambiguous grammar rule in gemma4 (#21661)

* common : fix ambiguous grammar rule in gemma4

* cont : fix missing comma...

common : enable reasoning budget sampler for gemma4 (#21697)

* fix: enable reasoning budget sampler for gemma4

Add thinking_start_tag and thinking_end_tag to
common_chat_params_init_gemma4(). Without these, the reasoning
budget sampler never activates for gemma4.

Make the newline after "thought" optional in the PEG parser to
handle budget=0 (sampler forces end tag before the newline).

Add test case for empty thinking block.

Fixes #21487

* use p.space() instead of p.optional(p.literal("\n")) in gemma4 thought parser
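The interaction between the budget sampler and the parser can be sketched as a small state machine. This is a hypothetical Python model (the real sampler lives in common/sampling.cpp and operates on tokens, not text pieces), but it shows why the newline after "thought" must be optional: with budget=0 the end tag is forced immediately after the start tag, before the model can emit the newline.

```python
from typing import Optional

IDLE, COUNTING, DONE = range(3)

class ReasoningBudget:
    """Illustrative reasoning-budget state machine (names are hypothetical)."""

    def __init__(self, start_tag: str, end_tag: str, budget: int):
        self.start_tag, self.end_tag = start_tag, end_tag
        self.budget = budget          # N >= 0 token budget
        self.state, self.used = IDLE, 0

    def next_forced_text(self, piece: str) -> Optional[str]:
        """Given the latest generated piece, return text to force, or None."""
        if self.state == IDLE and self.start_tag in piece:
            self.state = COUNTING
            if self.budget == 0:
                # budget=0: close the thinking block immediately, before
                # the model emits the newline the template would produce.
                self.state = DONE
                return self.end_tag
            return None
        if self.state == COUNTING:
            if self.end_tag in piece:
                self.state = DONE     # model closed the block on its own
                return None
            self.used += 1
            if self.used >= self.budget:
                self.state = DONE
                return self.end_tag   # budget exhausted: force the end tag
        return None

rb = ReasoningBudget("<thought>", "</thought>", budget=0)
print(rb.next_forced_text("<thought>"))  # -> </thought> (empty thinking block)
```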

common : better align to the updated official gemma4 template (#21704)

fix: Fix broken structured output when using $refs in json_schema (#21699)

chat: dedicated DeepSeek v3.2 parser + "official" template (#21785)
@firecoperana firecoperana marked this pull request as ready for review April 14, 2026 01:22
@firecoperana
Collaborator Author

@jukofyork I deleted that warning message.

I think this is ready for review. Mainline has about 50 commits after the main auto parser PR was merged.
I hope the port works and matches mainline's behavior.

aldehir and others added 3 commits April 13, 2026 20:44
…1870)

* common: skip reasoning budget sampler when no budget is requested

After I added thinking_start_tag / thinking_end_tag for gemma4 in #21697, the reasoning budget sampler gets unconditionally created even when no budget is configured (the default -1). The same applies to kimi_k2, lfm2, lfm2_5, and ministral_3 which also set these tags. The budget gets converted to INT_MAX, so the sampler never actually forces any tokens but still runs per-token checks (start tag matching in IDLE state, token-to-piece conversion + UTF-8 checks in COUNTING state).

More importantly, the mere existence of the sampler (non-null rbudget) disables backend sampling. Backend sampling lets the GPU select tokens directly, avoiding a full logits transfer from GPU to CPU every token. This could explain the 30% speed regression reported in #21784 (98 t/s to 70 t/s on Vulkan).

So I added a reasoning_budget_tokens >= 0 check to the sampler creation condition. When the budget is unlimited, the sampler is not created, backend sampling stays enabled, and no per-token overhead is added. When a budget is explicitly set (0, 128, 1024, etc.), the sampler is created and works as before.

* common: preserve rbudget when grammar is lazy

Following up on the review feedback on #21870: keep the reasoning budget sampler when grammar_lazy is true, so the thinking-block grammar suppression from #20970 still works when tools are in use. This way, we only skip the sampler when both no budget is set AND grammar is not lazy.
# Conflicts:
#	common/sampling.cpp
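The final creation condition from the two commits above can be stated in a few lines. This is a sketch with an illustrative function name (the actual check sits in common/sampling.cpp): the sampler is built only when it can do useful work, so the default budget of -1 with an eager grammar keeps backend (GPU-side) sampling enabled.

```python
def should_create_reasoning_budget_sampler(reasoning_budget_tokens: int,
                                           grammar_lazy: bool) -> bool:
    """Build the sampler only when an explicit budget is set (>= 0), or
    when a lazy grammar needs the thinking-block suppression from #20970.
    With the default budget of -1 and an eager grammar, no sampler is
    created, so backend sampling stays enabled and no per-token overhead
    is added."""
    return reasoning_budget_tokens >= 0 or grammar_lazy

print(should_create_reasoning_budget_sampler(-1, False))  # False: skip sampler
print(should_create_reasoning_budget_sampler(-1, True))   # True: lazy grammar
print(should_create_reasoning_budget_sampler(0, False))   # True: budget = 0
```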